Projective Ranking-based GNN Evasion Attacks
Graph neural networks (GNNs) offer promising learning methods for
graph-related tasks. However, GNNs are vulnerable to adversarial attacks. We
highlight two primary limitations of current evasion attack methods: (1)
GradArgmax ignores the "long-term" benefit of a perturbation and suffers
from zero gradients and invalid benefit estimates in certain
situations. (2) In reinforcement learning-based attack methods, the learned
attack strategies may not transfer when the attack budget changes. To
this end, we first formulate the perturbation space and propose an evaluation
framework and the projective ranking method. We aim to learn a powerful attack
strategy and then adapt it as little as possible to generate adversarial samples
under dynamic budget settings. In our method, based on mutual information, we
rank and assess the attack benefits of each perturbation for an effective
attack strategy. By projecting the strategy, our method dramatically minimizes
the cost of learning a new attack strategy when the attack budget changes. In
the comparative assessment with GradArgmax and RL-S2V, the results show our
method achieves high attack performance and effective transferability. The
visualization of our method also reveals various attack patterns in the
generation of adversarial samples.

Comment: Accepted by IEEE Transactions on Knowledge and Data Engineering.
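The budget-transfer idea described above, rank all candidate perturbations once by their estimated attack benefit and then take a prefix of that ranking for any given budget, can be sketched as follows. This is a minimal illustration, not the paper's method: the edge names and benefit scores are hypothetical, and the paper's mutual-information-based benefit estimation is replaced by scores assumed to be given.

```python
def rank_perturbations(scores):
    """Rank candidate edge perturbations by attack-benefit score
    (the paper estimates benefits via mutual information; here the
    scores are simply assumed given)."""
    return sorted(scores, key=scores.get, reverse=True)

def attack_under_budget(ranked, budget):
    # Reusing one ranking across budgets avoids learning a new
    # strategy each time the attack budget changes.
    return ranked[:budget]

# Hypothetical benefit scores for candidate edge flips:
scores = {("u1", "v3"): 0.9, ("u2", "v1"): 0.4, ("u1", "v2"): 0.7}
ranked = rank_perturbations(scores)
print(attack_under_budget(ranked, 2))  # [('u1', 'v3'), ('u1', 'v2')]
```

The point of the sketch is that only the cheap prefix selection depends on the budget; the expensive ranking step is shared.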
Leakage-Abuse Attacks Against Forward and Backward Private Searchable Symmetric Encryption
Dynamic searchable symmetric encryption (DSSE) enables a server to
efficiently search and update over encrypted files. To minimize the leakage
during updates, a security notion named forward and backward privacy is
expected for newly proposed DSSE schemes. Those schemes are generally
constructed in a way to break the linkability across search and update queries
to a given keyword. However, it remains underexplored whether forward and
backward private DSSE is resilient against practical leakage-abuse attacks
(LAAs), where an attacker attempts to recover query keywords from the leakage
passively collected during queries.
In this paper, we aim to give the first firm answer to this question through
two non-trivial efforts. First, we revisit the spectrum of forward and backward
private DSSE schemes over the past few years, and unveil some inherent
constructional limitations in most schemes. Those limitations allow attackers
to exploit query equality and establish a guaranteed linkage among different
(refreshed) query tokens that map to the same candidate keyword. Second, we refine
volumetric leakage profiles of updates and queries by associating each with a
specific operation. By further exploiting update volume and query response
volume, we demonstrate that all forward and backward private DSSE schemes can
leak the same volumetric information (e.g., insertion volume, deletion volume)
as those without such security guarantees. To verify our findings, we realize
two generic LAAs, i.e., a frequency matching attack and a volumetric inference
attack, and we evaluate them over various experimental settings in the dynamic
context. Finally, we call for new efficient schemes that protect query equality
and volumetric information across search and update queries.

Comment: A short version of this paper has been accepted to the 30th ACM
Conference on Computer and Communications Security (CCS'23).
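The first generic LAA can be illustrated as rank correlation between observed token frequencies (recoverable from query-equality leakage) and auxiliary keyword frequencies known to the attacker. The sketch below is a toy version under strong assumptions, not the paper's attack: the tokens, keywords, and frequencies are hypothetical, and matching is done purely by frequency rank.

```python
from collections import Counter

def frequency_matching(observed_tokens, keyword_freq):
    """Map each opaque query token to the candidate keyword whose known
    query frequency has the same rank as the token's observed frequency."""
    # Rank tokens by observed frequency (from query-equality leakage).
    token_counts = Counter(observed_tokens)
    tokens_by_freq = [t for t, _ in token_counts.most_common()]
    # Rank candidate keywords by their auxiliary (known) frequencies.
    keywords_by_freq = sorted(keyword_freq, key=keyword_freq.get, reverse=True)
    return dict(zip(tokens_by_freq, keywords_by_freq))

observed = ["t1"] * 9 + ["t2"] * 5 + ["t3"]
aux = {"password": 0.6, "invoice": 0.3, "salary": 0.1}
print(frequency_matching(observed, aux))
# {'t1': 'password', 't2': 'invoice', 't3': 'salary'}
```

The attack is entirely passive: it only needs the ability to tell when two queries carry the same token, which is exactly the query equality the paper shows most forward and backward private schemes still leak.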
RC-SSFL: Towards Robust and Communication-efficient Semi-supervised Federated Learning System
Federated Learning (FL) is an emerging decentralized artificial intelligence
paradigm, which promises to train a high-quality shared global model while
protecting user data privacy. However, current systems rely heavily on a
strong assumption: all clients have a wealth of ground-truth labeled data,
which may not always be feasible in real life. In this paper, we present a
practical Robust, and Communication-efficient Semi-supervised FL (RC-SSFL)
system design that can enable the clients to jointly learn a high-quality model
that is comparable to typical FL's performance. In this setting, we assume that
clients have only unlabeled data and the server has a limited amount of
labeled data. In addition, we consider that malicious clients may launch poisoning
attacks to harm the performance of the global model. To address this issue,
RC-SSFL employs a minimax optimization-based client selection strategy to
select the clients who hold high-quality updates and uses geometric median
aggregation to robustly aggregate model updates. Furthermore, RC-SSFL
implements a novel symmetric quantization method to greatly improve
communication efficiency. Extensive case studies on two real-world datasets
demonstrate that RC-SSFL maintains performance comparable to typical FL
in the presence of poisoning attacks while significantly reducing communication
overhead.
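Geometric median aggregation, the robust aggregation rule RC-SSFL relies on, is commonly approximated with Weiszfeld's algorithm. Below is a minimal NumPy sketch on toy two-dimensional "updates" (not the paper's implementation): the geometric median stays near the honest cluster even when one update is poisoned, whereas the plain mean would be dragged far away.

```python
import numpy as np

def geometric_median(updates, iters=100, eps=1e-8):
    """Weiszfeld's algorithm: iteratively reweight points by inverse distance
    to the current estimate; converges to the geometric median."""
    points = np.asarray(updates, dtype=float)
    median = points.mean(axis=0)  # start from the coordinate-wise mean
    for _ in range(iters):
        dists = np.linalg.norm(points - median, axis=1)
        dists = np.maximum(dists, eps)  # avoid division by zero
        weights = 1.0 / dists
        new_median = (weights[:, None] * points).sum(axis=0) / weights.sum()
        if np.linalg.norm(new_median - median) < eps:
            break
        median = new_median
    return median

# Three honest updates near [1, 1] and one poisoned outlier:
updates = [[1.0, 1.0], [1.1, 0.9], [0.9, 1.1], [50.0, -50.0]]
print(geometric_median(updates))  # remains close to [1, 1], unlike the mean
```

Because the geometric median has a breakdown point of 1/2, a minority of arbitrarily bad updates cannot move the aggregate far from the honest majority.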
A Survey on Federated Unlearning: Challenges, Methods, and Future Directions
In recent years, the notion of ``the right to be forgotten" (RTBF) has
evolved into a fundamental element of data privacy regulations, affording
individuals the ability to request the removal of their personal data from
digital records. Consequently, given the extensive adoption of data-intensive
machine learning (ML) algorithms and increasing concerns for personal data
privacy protection, the concept of machine unlearning (MU) has gained
considerable attention. MU empowers an ML model to selectively eliminate
sensitive or personally identifiable information it acquired during the
training process. Evolving from the foundational principles of MU, federated
unlearning (FU) has emerged to confront the challenge of data erasure within
the domain of federated learning (FL) settings. This empowers the FL model to
unlearn an FL client or identifiable information pertaining to the client while
preserving the integrity of the decentralized learning process. Nevertheless,
unlike traditional MU, the distinctive attributes of federated learning
introduce specific challenges for FU techniques, which call for tailored
designs of FU algorithms. Therefore, this
comprehensive survey delves into the techniques, methodologies, and recent
advancements in federated unlearning. It provides an overview of fundamental
concepts and principles, evaluates existing federated unlearning algorithms,
reviews optimizations tailored to federated learning, discusses practical
applications along with their limitations, and outlines promising directions
for future research.
Leia: A Lightweight Cryptographic Neural Network Inference System at the Edge
The advances in machine learning have revealed its great potential for emerging mobile applications such as face recognition and voice assistants. Models trained via a Neural Network (NN) can offer accurate and efficient inference services for mobile users. Unfortunately, the current deployment of such services raises privacy concerns. Directly offloading the model to the mobile device violates the model privacy of the model owner, while feeding user input to the service compromises user privacy. To address this issue, we propose, tailor, and evaluate Leia, a lightweight cryptographic NN inference system at the edge. Unlike prior cryptographic NN inference systems, Leia is designed from two mobile-friendly perspectives. First, Leia leverages the paradigm of edge computing, wherein the inference procedure keeps the model closer to the mobile user to foster low-latency service. Specifically, Leia's architecture consists of two non-colluding edge services that obliviously perform NN inference on the encoded user data and model. Second, Leia's realization makes judicious use of the potentially constrained computational and communication resources of edge devices. In particular, Leia adopts the Binarized Neural Network (BNN), a trending flavor of NN model with low memory footprint and computational cost, and purely relies on lightweight secret sharing techniques to develop the secure blocks of BNN. Empirical validation executed on Raspberry Pi confirms the practicality of Leia, showing that Leia can produce a prediction result with 97% accuracy within 4 seconds in the edge environment.
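The kind of lightweight secret sharing such two-server designs build on can be illustrated with two-party additive sharing over a ring: each non-colluding edge server holds one share, a single share reveals nothing about the value, and linear operations can be computed locally on shares. This is a toy sketch, not Leia's actual protocol; the modulus and values are illustrative.

```python
import secrets

MOD = 2**32  # ring for additive arithmetic secret sharing

def share(x):
    """Split x into two additive shares, one per non-colluding server.
    Each share alone is uniformly random and leaks nothing about x."""
    r = secrets.randbelow(MOD)
    return r, (x - r) % MOD

def reconstruct(s0, s1):
    """Recombine the two shares to recover the secret."""
    return (s0 + s1) % MOD

# Linear operations run locally on shares: each server adds its own
# shares of x and y, and the resulting shares reconstruct to x + y.
x0, x1 = share(7)
y0, y1 = share(35)
assert reconstruct(x0 + y0, x1 + y1) == 42
```

Multiplications (and the XNOR/popcount layers of a BNN) need extra correlated randomness or interaction between the two servers, which is where the bulk of a protocol's communication cost lies.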